Tesla's Autopilot: Ethics and Tragedy

Jatavallabha, Aravinda

arXiv.org Artificial Intelligence

This case study delves into the ethical ramifications of an incident involving Tesla's Autopilot, emphasizing Tesla Motors' moral responsibility. Using a seven-step ethical decision-making process, it examines user behavior, system constraints, and regulatory implications. This incident prompts a broader evaluation of ethical challenges in the automotive industry's adoption of autonomous technologies, urging a reconsideration of industry norms and legal frameworks. The analysis offers a succinct exploration of ethical considerations in evolving technological landscapes.


Cognitive Kernel: An Open-source Agent System towards Generalist Autopilots

Zhang, Hongming, Pan, Xiaoman, Wang, Hongwei, Ma, Kaixin, Yu, Wenhao, Yu, Dong

arXiv.org Artificial Intelligence

We introduce Cognitive Kernel, an open-source agent system towards the goal of generalist autopilots. Unlike copilot systems, which primarily rely on users to provide essential state information (e.g., task descriptions) and assist users by answering questions or auto-completing content, autopilot systems must complete tasks from start to finish independently, which requires the system to actively acquire state information from the environment. To achieve this, an autopilot system should be capable of understanding user intents, actively gathering necessary information from various real-world sources, and making wise decisions. Cognitive Kernel adopts a model-centric design. In our implementation, the central policy model (a fine-tuned LLM) initiates interactions with the environment using a combination of atomic actions, such as opening files, clicking buttons, saving intermediate results to memory, or calling the LLM itself. This differs from the widely used environment-centric design, where a task-specific environment with predefined actions is fixed, and the policy model is limited to selecting the correct action from a given set of options. Our design facilitates seamless information flow across various sources and provides greater flexibility. We evaluate our system in three use cases: real-time information management, private information management, and long-term memory management. The results demonstrate that Cognitive Kernel achieves performance better than or comparable to other closed-source systems in these scenarios. Cognitive Kernel is fully dockerized, ensuring everyone can deploy it privately and securely. We open-source the system and the backbone model to encourage further research on LLM-driven autopilot systems.
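The model-centric loop the abstract describes is easy to picture in code. Below is a minimal, hypothetical Python sketch of that design: a policy (standing in for the fine-tuned LLM) drives the loop and picks atomic actions, rather than a fixed environment offering a predefined menu. All names here (AgentState, open_file, toy_policy, and so on) are illustrative assumptions, not Cognitive Kernel's actual API.

```python
# Hypothetical sketch of a model-centric agent loop (not Cognitive Kernel's API).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentState:
    goal: str
    observations: list[str] = field(default_factory=list)  # info gathered so far
    memory: dict[str, str] = field(default_factory=dict)   # intermediate results

# Atomic actions: each takes the state plus a string argument, returns an observation.
def open_file(state: AgentState, path: str) -> str:
    try:
        with open(path) as f:
            return f.read()[:2000]  # truncate long files
    except OSError as e:
        return f"error: {e}"

def save_to_memory(state: AgentState, kv: str) -> str:
    key, _, value = kv.partition("=")
    state.memory[key] = value
    return f"saved {key}"

def call_llm(state: AgentState, prompt: str) -> str:
    # Placeholder for the policy model calling itself (an assumption here).
    return f"llm({prompt!r})"

ACTIONS: dict[str, Callable[[AgentState, str], str]] = {
    "open_file": open_file,
    "save_to_memory": save_to_memory,
    "call_llm": call_llm,
}

def run(policy, goal: str, max_steps: int = 10) -> AgentState:
    """The policy, not the environment, drives the loop: at each step it emits
    an (action, argument) pair or 'finish', and the resulting observation is
    appended to the state it sees on the next step."""
    state = AgentState(goal=goal)
    for _ in range(max_steps):
        action, arg = policy(state)
        if action == "finish":
            break
        obs = ACTIONS[action](state, arg)
        state.observations.append(f"{action}({arg}) -> {obs}")
    return state

# Toy policy standing in for the fine-tuned LLM: read a file, stash a note, stop.
def toy_policy(state: AgentState):
    if not state.observations:
        return "open_file", "notes.txt"
    if "summary" not in state.memory:
        return "save_to_memory", "summary=stub"
    return "finish", ""

if __name__ == "__main__":
    final = run(toy_policy, "summarize notes.txt")
    print(final.memory, len(final.observations))
```

The contrast with an environment-centric design is visible in run(): nothing fixes the action set per task, so a new information source can be wired in by registering one more atomic action, while the policy decides when to use it.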


Heart-stopping moment Tesla owner nearly plows into a moving TRAIN in 'self-drive' mode (and he says it wasn't the first time!)

Daily Mail - Science & tech

A Tesla owner is blaming his vehicle's Full Self-Driving feature for veering toward a passing train before he could intervene. Craig Doty II, from Ohio, was driving down a road at night earlier this month when dashcam footage showed his Tesla quickly approaching a passing train with no sign of slowing down. He claimed his vehicle was in Full Self-Driving (FSD) mode at the time and did not slow down despite the train crossing the road, though he did not specify the model of the car. In the video, the driver appears to have been forced to intervene, veering right through the railway crossing sign and coming to a stop mere feet from the moving train. Tesla has faced numerous lawsuits from owners who claim the FSD or Autopilot feature caused them to crash because it failed to stop for another vehicle or swerved into an object, in some cases claiming the lives of the drivers involved.


Virginia sheriff's office determines that Tesla's Autopilot was on during tractor-trailer crash

FOX News

Virginia authorities have determined that a Tesla was operating on its Autopilot system and was speeding in the moments leading up to a crash with a crossing tractor-trailer last July that killed the Tesla driver. The death of Pablo Teodoro III, 57, is the third since 2016 in which a Tesla using Autopilot ran underneath a crossing tractor-trailer, raising questions about the partially automated system's safety and where it should be allowed to operate. The crash south of Washington remains under investigation by the U.S. National Highway Traffic Safety Administration, which sent investigators to Virginia last summer and began a broader probe of Autopilot more than two years ago.


Tesla recalls more than 2m vehicles in US over Autopilot system

The Guardian

Tesla is recalling just over 2m vehicles in the United States fitted with its Autopilot advanced driver-assistance system to install new safeguards, after a safety regulator said the system was open to "foreseeable misuse". The National Highway Traffic Safety Administration (NHTSA) has been investigating the electric automaker led by the billionaire Elon Musk for more than two years over whether Tesla vehicles adequately ensure that drivers pay attention when using the driver-assistance system. Tesla said in the recall filing that Autopilot's software controls "may not be sufficient to prevent driver misuse" and could increase the risk of a crash. The acting NHTSA administrator, Ann Carlson, told Reuters in August that it was "really important that driver monitoring systems take into account that humans over-trust technology". Tesla's Autopilot is intended to enable cars to steer, accelerate and brake automatically within their lane, while enhanced Autopilot can assist with changing lanes on highways; neither system makes the cars autonomous.


Judge finds 'reasonable evidence' Tesla knew self-driving tech was defective

The Guardian

A judge has found "reasonable evidence" that Elon Musk and other executives at Tesla knew that the company's self-driving technology was defective but still allowed the cars to be driven in an unsafe manner, according to a recent ruling issued in Florida. Palm Beach county circuit court judge Reid Scott said he had found evidence that Tesla "engaged in a marketing strategy that painted the products as autonomous" and that Musk's public statements about the technology "had a significant effect on the belief about the capabilities of the products". The ruling, reported by Reuters on Wednesday, clears the way for a lawsuit over a fatal 2019 crash north of Miami involving a Tesla Model 3. The vehicle crashed into an 18-wheeler truck that had turned on to the road into the path of the driver, Stephen Banner, shearing off the Tesla's roof and killing Banner. The lawsuit, brought by Banner's wife, accuses the company of intentional misconduct and gross negligence, which could expose Tesla to punitive damages. The ruling comes after Tesla won two product liability lawsuits in California earlier this year focused on alleged defects in its Autopilot system.


The Morning After: US government announces AI Safety Institute

Engadget

Following President Joe Biden's sweeping executive order on AI development last week, Vice President Kamala Harris announced further machine learning initiatives at the UK AI Safety Summit yesterday, including the establishment of the United States AI Safety Institute. In cooperation with the Department of Commerce, the Biden administration will establish the United States AI Safety Institute (US AISI) within the National Institute of Standards and Technology (NIST). It will be responsible for creating and publishing guidelines, benchmark tests, best practices and more for evaluating potentially dangerous AI systems. Tests may even include the red-team exercises President Biden mentioned in his executive order last week. The Office of Management and Budget (OMB) -- it's an acronym-heavy morning -- will release the administration's first draft policy guidance on government AI use later this week.


Tesla's Autopilot was not to blame for fatal 2019 Model 3 crash, jury finds

Engadget

A California jury has found that Tesla was not at fault for a fatal 2019 crash that allegedly involved its Autopilot system, in the first US trial of a case claiming the software directly caused a death. The lawsuit alleged Tesla knowingly shipped cars with a defective Autopilot system, leading to a crash that killed a Model 3 owner and severely injured two passengers, Reuters reports. Per the lawsuit, 37-year-old Micah Lee was driving his Tesla Model 3 on a highway outside of Los Angeles at 65 miles per hour when it turned sharply off the road and slammed into a palm tree before catching fire. Lee died in the crash. The company was sued for $400 million plus punitive damages by Lee's estate and the two surviving victims, including a boy who was 8 years old at the time and was disemboweled in the accident, according to an earlier report from Reuters.


Tesla trial begins over whether 'experimental' autopilot caused driver's death

The Guardian

The lawyer representing victims of a fatal Tesla crash blamed the company's autopilot driver-assistance system, saying that "a car company should never sell consumers experimental vehicles", in his opening statement at a California trial on Thursday. The case stems from a civil lawsuit alleging that the autopilot system caused the Tesla Model 3 of its owner, Micah Lee, to suddenly veer off a highway east of Los Angeles at 65 mph (105 km/h), where the car struck a palm tree and burst into flames. The 2019 crash killed Lee and seriously injured his two passengers, including an eight-year-old boy who was disemboweled, according to court documents. The lawsuit, filed against Tesla by the passengers and Lee's estate, accuses Tesla of knowing that autopilot and other safety systems were defective when it sold the car. Jonathan Michaels, an attorney for the plaintiffs, said in his opening statement at the trial in Riverside, California, that when the 37-year-old Lee bought Tesla's "full self-driving capability package" for $6,000 for his Model 3 in 2019, the system was in "beta", meaning it was not yet ready for release.


Elon Musk and others urge AI pause, citing 'risks to society'

#artificialintelligence

March 29 (Reuters) - Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society. Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users by engaging them in human-like conversation, composing songs and summarising lengthy documents. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," said the letter issued by the Future of Life Institute. The non-profit is primarily funded by the Musk Foundation, as well as the London-based group Founders Pledge and the Silicon Valley Community Foundation, according to the European Union's transparency register. "AI stresses me out," Musk said earlier this month.